Stellantis says it overestimated the EV transition and is shifting back to hybrids, V8s, and what customers actually want:
Stellantis took a €22.2 billion ($26.25 billion) write-down last year, tied largely to scaling back electric vehicle programs. But buried inside the numbers is a much bigger message: the company openly acknowledged it moved faster than customers were ready to follow. According to Stellantis and Reuters, the automaker is now rebuilding its strategy around real-world demand rather than aggressive electrification targets.
CEO Antonio Filosa was unusually direct in Stellantis's announcement, saying the company "over-estimated the pace of the energy transition" and allowed its pre-planned strategy to overpower what buyers actually want. The result was billions written off in canceled EV products, impaired electric platforms, and downsized battery operations. Keep in mind, Stellantis had once aimed for electric vehicles to make up 50% of U.S. sales and all European sales by 2030, despite EV adoption in America sitting at 7%.
That disconnect is now being corrected, with Stellantis shifting capital back toward hybrids and internal combustion models that align more closely with consumer wants. And it seems other automakers have the same idea in mind, with even Porsche rumored to abandon the all-electric 718. To add fuel to the fire, there are countless players in the EV segment nowadays, with Chinese automakers seeming to lead the pack. Pursuing a profitable full-electric approach has become more difficult than ever before.
Beginning in March, all accounts will have a 'teen-appropriate experience by default'
Discord announced on Monday that it's rolling out age verification on its platform globally starting next month, when it will automatically set all users' accounts to a "teen-appropriate" experience unless they demonstrate that they're adults.
"For most adults, age verification won't be required, as Discord's age inference model uses account information such as account tenure, device and activity data, and aggregated, high-level patterns across Discord communities. Discord does not use private messages or any message content in this process," Savannah Badalich, Discord's global head of product policy, tells The Verge.
Users who aren't verified as adults will not be able to access age-restricted servers and channels, won't be able to speak in Discord's livestream-like "stage" channels, and will see content filters for any content Discord detects as graphic or sensitive. They will also get warning prompts for friend requests from potentially unfamiliar users, and DMs from unfamiliar users will be automatically filtered into a separate inbox.
Direct messages and servers that are not age-restricted will continue to function normally, but users won't be able to send messages or view content in an age-restricted server until they complete the age check process, even if it's a server they were part of before age verification rolled out. Badalich says those servers will be "obfuscated" with a black screen until the user verifies they're an adult. Users also won't be able to join any new age-restricted servers without verifying their age.
[...] If Discord's age inference model can't determine a user's age, a government ID might still be required for age verification in its global rollout. According to Discord, to remove the new "teen-by-default" changes and limitations, "users can choose to use facial age estimation or submit a form of identification to [Discord's] vendor partners, with more options coming in the future."
The first option uses AI to analyze a user's video selfie, which Discord says never leaves the user's device. If the age group estimate (teen or adult) from the selfie is incorrect, users can appeal it or verify with a photo of an identity document instead. That document will be verified by a third party vendor, but Discord says the images of those documents "are deleted quickly — in most cases, immediately after age confirmation."
Badalich also says after the October data breach, Discord "immediately stopped doing any sort of age verification flows with that vendor" and is now using a different third-party vendor. She adds, "We're not doing biometric scanning [or] facial recognition. We're doing facial estimation. The ID is immediately deleted. We do not keep any information around like your name, the city that you live in, if you used a birth certificate or something else, any of that information."
[...] Even so, there's still a risk that some users will leave Discord as a result of the age verification rollout. "We do expect that there will be some sort of hit there, and we are incorporating that into what our planning looks like," Badalich says. "We'll find other ways to bring users back."
Or it might just be destroyed by the Sun. It's a tough call:
It's been a while since we've had a Great Comet in the sky, something bright and visible for many. Currently, no object appears to fit the bill for 2026, but a couple of comets have a chance to become bright enough to be visible to the naked eye this April. In fact, a newly discovered Kreutz sungrazer has a very good chance of doing that.
The object is known as C/2026 A1 (MAPS), discovered very recently, on January 20, by a group of French amateur astronomers using the AMACS1 Observatory in the Atacama Desert, Chile. It has been traced back to the Kreutz group, a family of comets that includes some of the brightest ever seen, like the Great Comet of 1843.
Like other members of this group, it comes from below the plane of the Solar System. It will have its perihelion, its closest approach to the Sun, on April 4. At perihelion, the comet will be just 810,000 kilometers (about 500,000 miles) from our star. In comparison, interstellar comet 3I/ATLAS's perihelion in October 2025 saw it fly around 200 million kilometers (124 million miles) from the Sun.
Sungrazing comets can become very bright for quite a while, or very bright for a very short time, or just get ripped apart. We'll have to wait and see. It is already a record-breaker, however. No inbound Kreutz comet has ever been spotted so far from the Sun with such a long lead-in time (11.5 weeks) before reaching perihelion.
"It's moving on an orbit typical of Kreutz sungrazing comets, and already holds one record. At the time of its discovery, comet MAPS was farther from the Sun than any previous newly discovered sungrazer," Jonti Horner wrote in The Conversation. "That suggests it might be a larger-than-usual fragment—perhaps."
The previous record holder was Comet Ikeya–Seki, another Kreutz sungrazer, which passed at roughly half the distance from the Sun and was so bright it was even visible during the day. It was discovered one month before its perihelion in 1965. It was one of the brightest comets of the past millennium and definitely the brightest of the 20th century. Comet Ikeya–Seki was also very large, and it still broke apart into three pieces following its encounter with the Sun. This new comet is unlikely to be as large.
Comet MAPS is currently expected to become almost as bright as Venus as it passes by the Sun. That is obviously very bright, but it doesn't mean it will put on the show of a classic bright comet. Millennials and older folks may remember that in 1996 and 1997, the sky blessed us with two brightly visible comets: Hyakutake and Hale-Bopp. It's unlikely to look like either of them.
In the aftermath of its close encounter, the comet will be more favorably visible from the Southern Hemisphere. It will certainly be visible to solar observatories such as SOHO, so we should get some good images.
You might remember we said there were two comets of interest. If Comet MAPS doesn't pan out, there's always the chance that Comet C/2025 R3 (PanSTARRS) will be very bright after it reaches perihelion on April 19.
After years of bolting AI onto everything, Redmond remembers admins exist:
There is good news for administrators: Microsoft has delivered on its promise to build Sysmon functionality into Windows.
The functionality arrived in the Dev and Beta Windows Insider channels this week in builds 26300.7733 and 26220.7752, respectively. It allows administrators to capture system events via custom configuration files, filter for specific events, and write them to the standard Windows event log for pickup by third-party applications, including security tools.
Sysmon, part of the Sysinternals toolset, has long been useful for monitoring Windows' internals. Mark Russinovich, Microsoft technical fellow and co-founder of Winternals, from whence Sysinternals (and Sysmon) sprang, said: "It helps in detecting credential theft, uncovering stealthy lateral movement, and powering forensic investigations.
"Its granular diagnostic data feeds security information and event management (SIEM) pipelines and enables defenders to spot advanced attacks."
But deployment has been painful for administrators managing potentially thousands of endpoints across an enterprise, all of which need to be kept up to date. Russinovich noted "a lack of official customer support for Sysmon in production environments."
Having it built in (though disabled by default) is therefore welcome, a respite from Microsoft's relentless AI integrations across its portfolio.
Enabling it requires some work with PowerShell, which shouldn't trouble Sysmon-savvy users. Microsoft notes that any existing Sysmon installation must be uninstalled first before the built-in version can be enabled.
After a month of patches that Microsoft would rather forget, Sysmon's arrival is a genuinely positive update.
When reading my local newspaper online this morning, I noticed for the first time a small message, lower-left of the window, "Opt-Out Signal Honored". A little quick searching turned up GPC, Global Privacy Control https://globalprivacycontrol.org/
The GPC signal is intended to communicate a Do Not Sell or Share request under the California Consumer Privacy Act, and similar state privacy laws that allow users to opt out of data sales or the use of their data for cross-context targeted advertising. Under the GDPR, the intent of the GPC signal is to convey a general request that data controllers limit the sale or sharing of the user's personal data to other data controllers (GDPR Articles 7 & 21). The GPC may also invoke other compatible rights in other jurisdictions.
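For those wondering how a site even sees the signal: the GPC proposal exposes it to servers as a Sec-GPC: 1 request header and to page scripts as a navigator.globalPrivacyControl property. Below is a minimal TypeScript sketch of how a publisher could honor the signal and show a notice like "Opt-Out Signal Honored"; the helper functions here are hypothetical, not any particular newspaper's actual code.

// Minimal sketch, assuming a browser environment. navigator.globalPrivacyControl
// and the Sec-GPC header come from the GPC proposal; the helpers are hypothetical.
type GpcNavigator = Navigator & { globalPrivacyControl?: boolean };

function disableDataSale(): void {
  // Hypothetical: suppress ad-tech / data-sale integrations for this visit.
}

function showOptOutBadge(): void {
  // Hypothetical: render the small lower-left "Opt-Out Signal Honored" notice.
}

function honorGpcIfPresent(): void {
  const nav = navigator as GpcNavigator;
  if (nav.globalPrivacyControl === true) {
    disableDataSale(); // treat the signal as a Do Not Sell or Share request
    showOptOutBadge();
  }
}

honorGpcIfPresent();

A backend without any JavaScript can do the same check server-side by looking for the Sec-GPC: 1 header on incoming requests, which is how the signal is defined to travel with every page load.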
A little more digging shows that SN covered this in late 2020 (five+ years ago), https://soylentnews.org/article.pl?sid=20/10/08/0119236 but at that time it was in the EFF's Privacy Badger--which I was already using back then.
So my first thought is that all of a sudden my state (not California) has turned on a similar rule. But then, my partner's Win11 PC (no Privacy Badger) gave the same message on a consumer company catalog page (from yet a third state)--so maybe the message is coming from somewhere else? We do both use Firefox, but I'm on ESR (for Win7) and they are on the main track.
Why did the message start to appear today? Does GPC actually work? Any relation to the European GDPR?
The campaign allegedly cost $15 million for the ads, $70 million for the domain name.
AI.com bought its way onto the biggest advertising stage in the world on Sunday night, running a fourth-quarter Super Bowl ad spot that told tens of millions of sports fans worldwide to head to the site and create a handle. Hyped-up viewers arrived in droves, and then the site crashed.
Within minutes of the ad airing, users across social platforms reported that AI.com was either unreachable or stuck in failed sign-up loops, turning what was meant to be the site's big launch moment into an unexpected stress test that failed right before the eyes of millions. The company soon restored its service, but first impressions count.
In a post on X.com, co-founder and CEO Kris Marszalek, best known as the CEO of Crypto.com, said that the company had "prepared for scale, but not for THIS," later attributing the disruption to external factors outside the company's control. Marszalek later wrote that the website was "hitting Google rate limits (which are at their absolute global maximum)."
Linus Torvalds Confirms The Next Kernel Is Linux 7.0:
Following the release of Linux 6.19 stable, Linus Torvalds is out with his customary release announcement. Notably, he officially confirmed that the next kernel version will be Linux 7.0, the successor to Linux 6.19.
Linus Torvalds wrote in the Linux 6.19 release announcement:
"I have more than three dozen pull requests for when the merge window opens tomorrow - thank you to all the early maintainers. And as people have mostly figured out, I'm getting to the point where I'm being confused by large numbers (almost running out of fingers and toes again), so the next kernel is going to be called 7.0."
So it's on to the Linux 7.0 kernel cycle, kicking off tomorrow. The Linux 7.0 merge window will run for the next two weeks. Linux 7.0 stable should be out in mid-April, and it is also the kernel version expected to squeeze into Ubuntu 26.04 LTS.
There are a lot of exciting changes on the table for Linux 7.0.
Which also means, as Michael Larabel mentions in the article above:
Linux 6.19 Released With Better Support For Older AMD GPUs, DRM Color Pipeline API:
As anticipated, given the extra week added to the cycle for the end-of-year holidays, Linus Torvalds today released the Linux 6.19 stable kernel as the first major release of 2026. There is a lot in store with this early 2026 kernel release.
Linux 6.19 as usual is especially heavy on Intel and AMD changes including AMD GCN 1.0 / GCN 1.1 dGPUs now defaulting to the AMDGPU driver rather than Radeon legacy driver for better performance, RADV compatibility out-of-the-box, etc. On the Intel side there is more enablement work for Wildcat Lake and Nova Lake platforms. Plus Intel Linear Address Space Separation (LASS) and Content Adaptive Sharpness Filter (CASF) are among the new features enabled. Linux 6.19 also mainlines the DRM Color Pipeline API backed by Valve, various file-system improvements, the ASUS Armoury and Uniwill platform drivers, and much more.
See the Linux 6.19 feature overview for a more extensive look at the changes of this new kernel.
If you're looking to protect your privacy while using any of the best iPhones, one of the most effective things you can do is limit how your location data is used. And with iOS 26.3 just around the corner, you'll soon have another way to keep your private data under lock and key.
That's because Apple is about to introduce a new feature in iOS 26.3 called Limit Precise Location. As the name implies, this is designed to reduce the information that can be gleaned from your location, instead providing much coarser data to cellular providers.
In a new support document on Apple's website, the company outlines how Limit Precise Location works. After explaining that your location can be pinpointed based on the cell towers your phone connects to, Apple says its new setting restricts the information that's available to carriers in this way. That might mean they can only determine the rough neighborhood where you are located, for example, rather than a precise street address.
Apple also notes that this new feature does not limit "signal quality or user experience," and it also doesn't hinder first responders, as they can still see your exact location during an emergency.
In order to use it, you'll need to open the Settings app and tap Mobile Service > Mobile Data Options, then enable the toggle next to Limit Precise Location. Your device needs to be restarted whenever you enable or disable this feature.
It's worth noting that this new feature comes with some conditions. For one thing, Apple says you need to have an iPhone Air, iPhone 16e, or iPad Pro with M5 chip and Wi-Fi plus cellular connectivity in order for the feature to work.
Your phone must also be running on a compatible network, as detailed below:
- Germany: Telekom
- United Kingdom: EE, BT
- United States: Boost Mobile
- Thailand: AIS, True
[...] All of this means the new feature is having a somewhat limited rollout for the time being. But as more Apple devices start to use the company's C1 and C1X modems – the ones outfitted in the compatible phones listed earlier – this kind of privacy-preserving tool should become the norm for Apple fans. And that's great news for anyone who wants to guard their privacy just a little more securely.
The window to patch vulnerabilities is shrinking rapidly:
Russian-state hackers wasted no time exploiting a critical Microsoft Office vulnerability that allowed them to compromise the devices inside diplomatic, maritime, and transport organizations in more than half a dozen countries, researchers said Wednesday.
The threat group, tracked under names including APT28, Fancy Bear, Sednit, Forest Blizzard, and Sofacy, pounced on the vulnerability, tracked as CVE-2026-21509, less than 48 hours after Microsoft released an urgent, unscheduled security update late last month, the researchers said. After reverse-engineering the patch, group members wrote an advanced exploit that installed one of two never-before-seen backdoor implants.
The entire campaign was designed to make the compromise undetectable to endpoint protection. Besides being novel, the exploits and payloads were encrypted and ran in memory, making their malice hard to spot. The initial infection emails came from previously compromised government accounts in multiple countries and were likely familiar to the targeted recipients. Command and control channels were hosted in legitimate cloud services that are typically allow-listed inside sensitive networks.
"The use of CVE-2026-21509 demonstrates how quickly state-aligned actors can weaponize new vulnerabilities, shrinking the window for defenders to patch critical systems," the researchers, with security firm Trellix, wrote. "The campaign's modular infection chain—from initial phish to in-memory backdoor to secondary implants was carefully designed to leverage trusted channels (HTTPS to cloud services, legitimate email flows) and fileless techniques to hide in plain sight."
The 72-hour spear phishing campaign began January 28 and delivered at least 29 distinct email lures to organizations in nine countries, primarily in Eastern Europe. Trellix named eight of them: Poland, Slovenia, Turkey, Greece, the UAE, Ukraine, Romania, and Bolivia. Organizations targeted were defense ministries (40 percent), transportation/logistics operators (35 percent), and diplomatic entities (25 percent).
[...] Trellix attributed the campaign to APT28 with "high confidence" based on technical indicators and the targets selected. Ukraine's CERT-UA has also attributed the attacks to UAC-0001, a tracking name that corresponds to APT28.
"APT28 has a long history of cyber espionage and influence operations," Trellix wrote. "The tradecraft in this campaign—multi-stage malware, extensive obfuscation, abuse of cloud services, and targeting of email systems for persistence—reflects a well-resourced, advanced adversary consistent with APT28's profile. The toolset and techniques also align with APT28's fingerprint."
Trellix has provided a comprehensive list of indicators organizations can use to determine if they have been targeted.
https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
The IBM mainframe computer has evolved over a period spanning almost six decades, in part in response to wider industry trends, notably the advent of "midrange" and personal computers and the sweeping success of TCP/IP. However, the mainframe has also been responsible for delivering features and functionality which would only come much later to smaller systems, not to mention enduring and enviable reliability which is hard to beat even today.
This post takes a look at the IBM 3270 Information Display System, which played a key role in enabling a single mainframe computer to scale and serve thousands of users. It should also be noted that, while the post discusses the system mostly in the past tense, the mainframe itself very much lives on, and so does 3270, albeit nowadays as a protocol run on top of TCP/IP.
Five other states have introduced similar bills recently as data center development skyrockets:
On Friday, New York State Senators Liz Krueger and Kristen Gonzales introduced a bill that would stop the issuance of permits for new data centers for at least three years and ninety days to give time for impact assessments and to update regulations. The bill would require the Department of Environmental Conservation and Public Service Commissions to issue impact statements and reports during the pause, along with any new orders or regulations that they deem necessary to minimize data centers' impacts on the environment and consumers in New York.
The bill would require these departments to study data centers' water, electricity and gas usage, and their impact on the rates of these resources, among other things. The bill, citing a Bloomberg analysis, notes that, "Nationally, household electricity rates increased 13 percent in 2025, largely driven by the development of data centers." New York is the sixth state this year to introduce a bill aiming to put the brakes on data centers, following in the footsteps of Georgia, Maryland, Oklahoma, Vermont and Virginia, according to Wired. It's still very much in the early stages, and is now with the Senate Environmental Conservation Committee for consideration.
Researchers from Adelaide University worked with the National Institute of Standards and Technology (NIST) in the United States and the National Physical Laboratory (NPL) in the United Kingdom to review the future of the next generation of timekeeping.
They found that development is happening at such a fast rate that optical atomic clocks are well positioned to become the gold standard for timekeeping within the next few years, provided some technical challenges can be addressed.
"Optical atomic clocks have advanced rapidly over the past decade, to the point where they are now one of the most precise measurement tools ever built. They're more accurate than the best microwave atomic clocks and can even work outside the lab – this is a place that conventional atomic clocks have trouble venturing," said co-author Professor Andre Luiten from Adelaide University's Institute for Photonics and Advanced Sensing.
Optical atomic clocks are made from laser-cooled trapped ions and atoms. When scientists repeatedly probe the atoms with a laser, they respond only at a special frequency which can be converted into ticks to track time accurately.
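A rough back-of-the-envelope comparison shows why the optical regime matters (the caesium value is exact by definition of the SI second; the strontium value is rounded):

\[ \nu_{\mathrm{Cs}} = 9\,192\,631\,770~\mathrm{Hz} \quad (\text{microwave}), \qquad \nu_{\mathrm{Sr}} \approx 4.29 \times 10^{14}~\mathrm{Hz} \quad (\text{optical}), \qquad \frac{\nu_{\mathrm{Sr}}}{\nu_{\mathrm{Cs}}} \approx 4.7 \times 10^{4}. \]

Since a clock's fractional instability scales roughly as \( \sigma_y \sim \delta\nu / \nu_0 \), referencing an optical rather than a microwave transition divides the same absolute frequency error by an extra factor of tens of thousands before any other improvement is counted.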
The review into the next generation technology, which has been published in the journal Optica, outlines the key features, progress that's been made over the past decade, challenges and future applications.
"A decade ago, optical atomic clocks had no impact on the steering of international time. Today, at least ten have been approved for use," said Professor Luiten.
A roadmap for redefining how the second is measured is underway, but researchers have noted other potential uses for optical atomic clocks, including as gravity sensors that can aid in creating an international height reference system that's not based on sea level. Their precision and sensitivity also positions them as a useful tool for testing fundamental physics such as dark matter.
They could be relied on to maintain accurate time during satellite outages caused by solar storms or malicious attacks. This latter opportunity is seeing an outpouring of commercial interest in optical clocks, including from Adelaide University spin-out, QuantX Labs.
Despite the rapid development of this technology, the review does identify several key challenges. These include limitations to the operational capability of optical atomic clocks, with many still operating only intermittently. Decisions around how to redefine the second also need to be made, including whether a single type of optical atomic clock or a group of them is the most reliable way to replace caesium fountain clocks, with direct comparisons needed.
Supply chains for critical components are also underdeveloped, resulting in higher costs. However, researchers believe progress in quantum computing and bioscience is likely to lead to more affordable and accessible systems in the future.
Journal Reference: https://doi.org/10.1364/OPTICA.575770
Vibe Coding Is Killing Open Source Software, Researchers Argue:
According to a new study from a team of researchers in Europe, vibe coding is killing open-source software (OSS) and it's happening faster than anyone predicted.
Thanks to vibe coding, a colloquialism for the practice of quickly writing code with the assistance of an LLM, anyone with a small amount of technical knowledge can churn out computer code and deploy software, even if they don't fully review or understand all the code they churn out. But there's a hidden cost. Vibe coding relies on vast amounts of open-source software, a trove of libraries, databases, and user knowledge that's been built up over decades.
Open-source projects rely on community support to survive. They're collaborative projects where the people who use them give back, either in time, money, or knowledge, to help maintain the projects. Humans have to come in and fix bugs and maintain libraries.
Vibe coders, according to these researchers, don't give back.
The study, Vibe Coding Kills Open Source, takes an economic view of the problem and asks the question: is vibe coding economically sustainable? Can OSS survive when so many of its users are takers and not givers? According to the study, no.
"Our main result is that under traditional OSS business models, where maintainers primarily monetize direct user engagement...higher adoption of vibe coding reduces OSS provision and lowers welfare," the study said. "In the long-run equilibrium, mediated usage erodes the revenue base that sustains OSS, raises the quality threshold for sharing, and reduces the mass of shared packages...the decline can be rapid because the same magnification mechanism that amplifies positive shocks to software demand also amplifies negative shocks to monetizable engagement. In other words, feedback loops that once accelerated growth now accelerate contraction."
[...] According to Koren, one of the study's authors, vibe-coders simply don't give back to the OSS communities they're taking from. "The convenience of delegating your work to the AI agent is too strong. There are some superstar projects like Openclaw that generate a lot of community interest but I suspect the majority of vibe coders do not keep OSS developers in their minds," he said. "I am guilty of this myself. Initially I limited my vibe coding to languages I can read if not write, like TypeScript. But for my personal projects I also vibe code in Go, and I don't even know what its package manager is called, let alone be familiar with its libraries."
The study said that vibe coding is reducing the cost of software development, but that there are other costs people aren't considering. "The interaction with human users is collapsing faster than development costs are falling," Koren told 404 Media. "The key insight is that vibe coding is very easy to adopt. Even for a small increase in capability, a lot of people would switch. And recent coding models are very capable. AI companies have also begun targeting business users and other knowledge workers, which further eats into the potential 'deep-pocket' user base of OSS."
This won't end well. "Vibe coding is not sustainable without open source," Koren said. "You cannot just freeze the current state of OSS and live off of that. Projects need to be maintained, bugs fixed, security vulnerabilities patched. If OSS collapses, vibe coding will go down with it. I think we have to speak up and act now to stop that from happening."
He said that major AI firms like Anthropic and OpenAI can't continue to free ride on OSS or the whole system will collapse. "We propose a revenue sharing model based on actual usage data," he said. "The details would have to be worked out, but the technology is there to make such a business model feasible for OSS."
[...] "Popular libraries will keep finding sponsors," Koren said. "Smaller, niche projects are more likely to suffer. But many currently successful projects, like Linux, git, TeX, or grep, started out with one person trying to scratch their own itch. If the maintainers of small projects give up, who will produce the next Linux?"
arXiv link: https://arxiv.org/abs/2601.15494
CIO published a very interesting article about how the use of AI by the best engineers is actually slowing them down, and not quite delivering on the promised speed-up in production code:
We've all heard the pitch. By now, it's practically background noise in every tech conference: AI coding is solved. We are told that large language models (LLMs) will soon write 80% of all code, leaving human engineers to merely supervise the output.
For a CIO, this narrative is quite seductive. It promises a massive drop in the cost of software production while increasing the engineering speed. It suggests that the bottleneck of writing code is about to vanish.
But as someone who spends his days building mission-critical financial infrastructure and autonomous agent platforms, I have to be the bearer of bad news: it's not working out that way. At least, not for your best engineers.
The deployment of AI copilots into the workflows of experienced engineers isn't producing the frictionless acceleration promised in the brochures. Instead, I'm seeing the emergence of a productivity trap — a hidden tax on velocity that is disproportionately hitting your most valuable technical talent.
[...]
For the first few years of the generative AI boom, we operated on vibes. We had anecdotal evidence and vendor-sponsored studies claiming massive productivity gains. And for junior developers working on simple tasks, those gains were real. If you just need a basic React component for a login button, using AI feels like a miracle.
But we got a reality check in mid-2025. A randomized controlled trial by METR (Model Evaluation & Threat Research) analyzed the impact on senior engineering talent. Unlike previous studies that used toy problems, this one watched experienced developers working on their own mature codebases — the kind of messy, complex legacy systems that actually power your business.
The results were stark. When experienced developers used AI tools to complete real-world maintenance tasks, they took 19% longer than when they worked without them.
[...]
It comes down to what I call the illusion of velocity. In the study, developers felt faster. They predicted the AI would save them huge amounts of time. Even after they finished — and were objectively recorded as being slower — they still believed the AI had been a timesaver.
The AI gives you a dopamine hit. Text appears on the screen at superhuman speed and the blank page problem vanishes. But the engineer's role has shifted from being a creator to being a reviewer and that is where the trap snaps shut.
According to the 2025 Stack Overflow Developer Survey, the single greatest frustration for developers is dealing with AI solutions that look correct but are slightly wrong. Nearly half of developers explicitly stated that debugging AI-generated code takes more time than writing it themselves.
In software engineering, blatantly broken code is fine. The compiler screams, the app crashes upon launch, the red squiggly lines appear. You know it's wrong immediately.
Almost-right code is insidious. It compiles. It runs. It passes the basic unit tests. But it contains subtle logical flaws or edge-case failures that aren't immediately obvious.
When I use an AI, I am forced into reverse-engineering. I get a block of code I didn't write. I have to read it, decipher the intent of the model and then map that intent against the requirements of my system.
I saw this firsthand when building financial systems for enterprise logistics. The logic required to calculate net revenue was sophisticated with bespoke business rules. If I asked an LLM to generate the billing code, it would give me something that looked mathematically perfect. It would sum the line items correctly.
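A hypothetical illustration of that failure mode (not the author's actual billing code): the LLM-style version below sums the line items correctly and passes a naive check, but silently ignores a bespoke business rule that the real system depends on.

// Hypothetical "almost-right" billing sketch in TypeScript; amounts in cents.
interface LineItem {
  amount: number;    // in cents, to avoid floating-point drift
  refunded: boolean; // bespoke rule: refunded items must be excluded
}

// What an LLM might plausibly produce: compiles, runs, looks mathematically fine.
function netRevenueAlmostRight(items: LineItem[]): number {
  return items.reduce((total, item) => total + item.amount, 0);
}

// What the business actually requires once the edge case is applied.
function netRevenue(items: LineItem[]): number {
  return items
    .filter((item) => !item.refunded)
    .reduce((total, item) => total + item.amount, 0);
}

const items: LineItem[] = [
  { amount: 10_000, refunded: false },
  { amount: 2_500, refunded: true },
];

console.log(netRevenueAlmostRight(items)); // 12500 -- plausible, but wrong
console.log(netRevenue(items));            // 10000 -- correct

Nothing about the wrong version screams "bug": it only shows up when someone who knows the business rule reads the code, which is exactly the review cost the article is describing.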
[...]
There is also the cost of context switching. Deep work, or flow state, is the essence of high-level engineering. It takes time to load the context of a distributed system into your brain.
AI tools, in their current chat-based forms, encourage a fragmented workflow. You stop coding, you prompt the bot, you wait, you review, you reject, you re-prompt. The flow is gone.
[...]
So, if the current copilot model is a trap for your best talent, what do we do? We certainly don't ban AI. That would be like banning calculators because you sometimes hit the wrong button.
We need to move from AI-assisted coding to AI-enabled architecture. The goal isn't to make your senior engineers type faster, but to enable them to build systems that are robust enough to handle the chaos of AI-generated code.
[...]
The popular 80/20 split — where AI does 80% of the work and humans do the 20% — is misleading. It implies the human part is just a finishing touch. In reality, that 20% is 100% of the value. It's the architecture, the security model and the business logic.
To escape the productivity trap, you need to direct your engineering leaders to focus entirely on this human 20%.
My own work has shifted away from writing features and toward defining the physics of our codebase. When I was at Uber, I spent a huge amount of time migrating our systems to use strict types and schemas.
[...]
This is the strategic shift. The role of the senior engineer is to build the compiler for the AI. They need to create the schemas, the type systems and the automated rules that constrain what the AI can do.
This transforms the almost-right problem. Instead of me manually reviewing code to find errors, the system rejects the code automatically if it doesn't fit the architecture. I stop being a reviewer and start being a legislator.
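A minimal sketch of what "building the compiler for the AI" can look like in practice, assuming TypeScript: encode an architectural rule as a type so that code violating it fails to compile rather than waiting for a human reviewer. The Cents brand and helper names below are hypothetical, not the author's actual Uber schemas.

// Branded unit type: a bare number cannot be used where Cents is required.
type Cents = number & { readonly __unit: "cents" };

function cents(n: number): Cents {
  return Math.round(n) as Cents; // the only sanctioned way to mint the unit
}

function addCharges(a: Cents, b: Cents): Cents {
  return (a + b) as Cents;
}

const fee = cents(250);
const tip = cents(500);

addCharges(fee, tip);     // OK: both values carry the Cents unit
// addCharges(fee, 3.99); // rejected at compile time: a bare number (dollars?)
//                        // is not assignable to Cents

The same idea scales up to runtime schema validation at service boundaries; either way, the constraint does the rejecting, not the senior engineer's eyeballs.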
[...]
The AI productivity trap is real, but it's not inevitable. It's a symptom of applying a new technology using an old workflow. The path forward is rigorous, architectural and deeply human. It requires us to value the design and the constraint-setting as the true core of engineering.
As Brian Kernighan said, "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it". And now, a corollary: "If AI is smarter than you, who the hell is going to debug the code?"
FBI stymied by Apple's Lockdown Mode after seizing journalist's iPhone:
The Federal Bureau of Investigation has so far been unable to access data from a Washington Post reporter's iPhone because it was protected by Apple's Lockdown Mode when agents seized the device from the reporter's home, the US government said in a court filing.
FBI agents were able to access the reporter's work laptop by telling her to place her index finger on the MacBook Pro's fingerprint reader, however. This occurred during the January 14 search at the Virginia home of reporter Hannah Natanson.
As previously reported, the FBI executed a search warrant at Natanson's home as part of an investigation into a Pentagon contractor accused of illegally leaking classified data. FBI agents seized an iPhone 13 owned by the Post, one MacBook Pro owned by the Post and another MacBook Pro owned by Natanson, a 1TB portable hard drive, a voice recorder, and a Garmin watch.
Government investigators want to read Natanson's Signal messages, and were able to view at least some of them on her work laptop. The reporter has said she has a contact list of 1,100 current and former government employees in Signal, which she uses for encrypted chats.
The Justice Department described the search in a court filing that was submitted Friday in US District Court for the Eastern District of Virginia and noted in a 404 Media article today. The government filing opposes a motion in which the Post and Natanson asked the court to order the return of the seized devices. A federal magistrate judge previously issued a standstill order telling the government to stop searching the devices until the court rules on whether they must be returned.
"The iPhone was found powered on and charging, and its display noted that the phone was in 'Lockdown' mode," the government filing said. After the seized devices were taken to the FBI's Washington field office, the Computer Analysis Response Team (CART) "began processing each device to preserve the information therein," the filing said.
CART couldn't get anything from the iPhone. "Because the iPhone was in Lockdown mode, CART could not extract that device," the government filing said.
The government also submitted a declaration by FBI Assistant Director Roman Rozhavsky that said the agency "has paused any further efforts to extract this device because of the Court's Standstill Order." The FBI did extract information from the SIM card "with an auto-generated HTML report created by the tool utilized by CART," but "the data contained in the HTML was limited to the telephone number."
Apple says that Lockdown Mode "helps protect devices against extremely rare and highly sophisticated cyber attacks," and is "designed for the very few individuals who, because of who they are or what they do, might be personally targeted by some of the most sophisticated digital threats."
Introduced in 2022, Lockdown Mode is available for iPhones, iPads, and Macs. It must be enabled separately for each device. To enable it on an iPhone or iPad, a user would open the Settings app, tap Privacy & Security, scroll down and tap Lockdown Mode, and then tap Turn on Lockdown Mode.
The process is similar on Macs. In the System Settings app that can be accessed via the Apple menu, a user would click Privacy & Security, scroll down and click Lockdown Mode, and then click Turn On.
"When Lockdown Mode is enabled, your device won't function like it typically does," Apple says. "To reduce the attack surface that potentially could be exploited by highly targeted mercenary spyware, certain apps, websites, and features are strictly limited for security and some experiences might not be available at all."
Lockdown Mode blocks most types of message attachments, blocks FaceTime calls from people you haven't contacted in the past 30 days, restricts the kinds of browser technologies that websites can use, limits photo sharing, and imposes other restrictions. Users can exclude specific apps and websites they trust from these restrictions, however.
FBI agents had more success getting into Natanson's other devices, though the Justice Department complained that "Ms. Natanson misled investigators about the devices that were seized. She misrepresented to officers that the devices could not be unlocked with biometrics, possibly in order to prevent the Government from reviewing materials within the scope of the search warrant."
The Rozhavsky declaration said that during the home search, FBI agents "advised Natanson that the FBI could not compel her to provide her passcodes," but "the warrant did give the FBI authority to use Natanson's biometrics, such as facial recognition or fingerprints, to open her devices. Natanson stated that she did not use biometrics on her devices."
Natanson's personal MacBook Pro was powered off when it was found by FBI agents. The Post-owned MacBook Pro was found in a backpack in the kitchen and was powered on and locked. The FBI said an agent "presented Natanson with her open laptop" and "assisted" her in unlocking the device with her finger. The declaration described what happened as follows:
Natanson was reminded the FBI has authority to use her biometrics to unlock the laptop and Natanson repeated that she does not use biometrics on her devices. Natanson was told she must try, in accordance with the authorization in the warrant. The FBI assisted Natanson with applying her right index finger to the fingerprint reader which immediately unlocked the laptop.
In 2024, a federal appeals court ruled that the Constitution's Fifth Amendment protection against self-incrimination does not prohibit police officers from forcing a suspect to unlock a phone with a thumbprint scan. That case involved a traffic stop, rather than a home search authorized by a warrant.
The FBI has so far been unable "to obtain a full physical image" of Natanson's work laptop, but did make a "limited partial live logical image," the government filing said. At least some of Natanson's Signal chat messages were set for auto-deletion, so FBI agents took photos and made audio recordings of the chats, but the government filing said this was done "only for preservation purposes and no substantive review has occurred."
The FBI apparently hasn't gotten any data from Natanson's personal computer. "Natanson's personal MacBook Pro is password protected and encrypted and therefore no imaging was effected. The FBI paused any further efforts because of [the] Court's Standstill Order. No review has occurred," Rozhavsky wrote.
The government said it processed data from the voice recorder and 1TB hard drive but has not reviewed the data yet. The Garmin watch wasn't processed before the court issued a standstill order; "therefore, no processing will occur until further order of the Court," the declaration said.